OpenAI's Sora Is a Total Mystery

The Atlantic - Technology

Yesterday afternoon, OpenAI teased Sora, a video-generation model that promises to convert written text prompts into highly realistic videos. Footage released by the company depicts such examples as "a Shiba Inu dog wearing a beret and black turtleneck" and "in an ornate, historical hall, a massive tidal wave peaks and begins to crash." The excitement from the press has been reminiscent of the buzz surrounding the image creator DALL-E or ChatGPT in 2022: Sora is described as "eye-popping," "world-changing," and "breathtaking, yet terrifying." The imagery is genuinely impressive. At a glance, one example of an animated "fluffy monster" looks better than Shrek; an "extreme close up" of a woman's eye, complete with a reflection of the scene in front of her, is startlingly lifelike.


Unlocking Business Value from Machine Learning: Model Interpretability

#artificialintelligence

For the same reason that star players make bad coaches, models that make complicated decisions at high levels of abstraction come at a price: they can't easily explain their reasoning. This is a direct (and sometimes expensive) tradeoff, and a similar one exists in machine learning: the more powerful the model, the harder it is to interpret its inner workings. Sure, you may get a more accurate answer from a neural network, but how it arrived at that answer may be a total mystery. This can be a problem when you're trying to figure out what went wrong or how to improve the model.
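The tradeoff can be seen in miniature with a toy sketch (the feature names and weights below are invented for illustration, not drawn from the article): a linear model's prediction decomposes into one readable contribution per feature, while even a tiny neural network's does not.

```python
import math

def explain_linear(weights, features):
    # Each weight * value term is a self-contained "reason" for the score,
    # so the model's decision can be read directly off its parameters.
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

def mlp_predict(features):
    # A tiny two-hidden-unit network (weights invented for illustration).
    # Each feature feeds several nonlinear units, so no single weight maps
    # to one feature's effect on the output -- there is no per-feature
    # decomposition to read off the way there is for the linear model.
    h1 = math.tanh(0.5 * features["income"] - 0.3 * features["debt"])
    h2 = math.tanh(-0.2 * features["income"] + 0.9 * features["age"])
    return 1.1 * h1 - 0.7 * h2

weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
features = {"income": 5.0, "debt": 2.0, "age": 3.0}

score, contributions = explain_linear(weights, features)
# score is 4.0 - 2.4 + 0.3 = 1.9, and each term explains itself
opaque_score = mlp_predict(features)
```

The interpretable model answers "why?" for free; for the network, the same question requires extra machinery (post-hoc attribution methods), which is the price the article describes.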